Restarting Frank–Wolfe: Faster Rates under Hölderian Error Bounds

Authors

Abstract

Conditional gradient algorithms (aka Frank–Wolfe algorithms) form a classical set of methods for constrained smooth convex minimization due to their simplicity, the absence of projection steps, and competitive numerical performance. While the vanilla algorithm only ensures a worst-case rate of $\mathcal{O}(1/\epsilon)$, various recent results have shown that for strongly convex functions on polytopes, the method can be slightly modified to achieve linear convergence. However, this still leaves a huge gap between the sublinear $\mathcal{O}(1/\epsilon)$ convergence and the linear $\mathcal{O}(\log 1/\epsilon)$ convergence needed to reach an $\epsilon$-approximate solution. Here, we present a new variant of conditional gradient algorithms that dynamically adapts to the function's geometric properties using restarts and smoothly interpolates between these two regimes. These interpolated convergence rates are obtained when the optimization problem satisfies a new type of error bounds, which we call strong Wolfe primal bounds. They combine geometric information on the constraint set with Hölderian error bounds on the objective function.
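
To make the scheme concrete, below is a minimal Python sketch of a Frank–Wolfe method with a gap-halving restart heuristic on a toy problem. The test instance (least squares over the probability simplex), the short-step rule, and the restart schedule are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

# Minimal sketch: Frank-Wolfe (conditional gradient) with a gap-halving
# restart scheme, in the spirit of the abstract above. The problem instance,
# step rule, and restart schedule are illustrative assumptions.

def lmo_simplex(grad):
    """Linear minimization oracle for the probability simplex:
    argmin_v <grad, v> is attained at the vertex with the smallest
    gradient coordinate."""
    v = np.zeros_like(grad)
    v[np.argmin(grad)] = 1.0
    return v

def frank_wolfe(grad_f, L, x0, tol, max_iter=5000):
    """Frank-Wolfe with the standard short step gamma = min(gap/(L*||d||^2), 1).
    Stops once the Frank-Wolfe gap <grad, x - v>, an upper bound on
    f(x) - f*, drops below tol."""
    x = x0.copy()
    for _ in range(max_iter):
        g = grad_f(x)
        v = lmo_simplex(g)
        d = x - v
        gap = g @ d
        if gap <= tol:
            break
        gamma = min(gap / (L * (d @ d)), 1.0)
        x = x - gamma * d
    return x, gap

def restarted_frank_wolfe(grad_f, L, x0, eps, gap0=1.0):
    """Restart each time the target accuracy is halved, warm-starting from
    the previous iterate; under Holderian-type error bounds such restart
    schemes interpolate between the sublinear and linear regimes."""
    x, target = x0, gap0
    while target > eps:
        target /= 2.0
        x, _ = frank_wolfe(grad_f, L, x, tol=target)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
    grad = lambda x: A.T @ (A @ x - b)   # gradient of 0.5 * ||Ax - b||^2
    L = np.linalg.norm(A, 2) ** 2        # smoothness constant ||A||_2^2
    x = restarted_frank_wolfe(grad, L, np.ones(5) / 5, eps=1e-6)
    print("solution:", np.round(x, 4))
```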


Related articles

Faster Subgradient Methods for Functions with Hölderian Growth

The purpose of this manuscript is to derive new convergence results for several subgradient methods for minimizing nonsmooth convex functions with Hölderian growth. The growth condition is satisfied in many applications and includes functions with quadratic growth and functions with weakly sharp minima as special cases. To this end there are four main contributions. First, for a constant and su...
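
As a rough illustration of the setting described above, the following Python sketch runs a projected subgradient method on a toy nonsmooth problem with weakly sharp minima (a special case of Hölderian growth). The test function, step-size schedule, and projection are assumptions for illustration, not the manuscript's methods.

```python
import numpy as np

# Illustrative sketch: projected subgradient descent on f(x) = ||x||_1 over
# the box [-1, 1]^n. The minimizer x* = 0 is weakly sharp (f grows linearly
# with the distance to x*), a special case of Holderian growth. The classical
# O(1/sqrt(t)) step size below is the baseline that growth-aware methods improve.

def projected_subgradient(subgrad, project, x0, steps):
    x = x0.copy()
    for t in range(steps):
        g = subgrad(x)
        alpha = 1.0 / np.sqrt(t + 1)   # classical diminishing step size
        x = project(x - alpha * g)
    return x

if __name__ == "__main__":
    subgrad = lambda x: np.sign(x)             # a subgradient of ||x||_1
    project = lambda x: np.clip(x, -1.0, 1.0)  # projection onto the box
    x = projected_subgradient(subgrad, project, np.full(5, 0.9), steps=200)
    print("final objective:", np.abs(x).sum())
```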


Lower Bounds for BMRM and Faster Rates for Training SVMs

Regularized risk minimization with the binary hinge loss and its variants lies at the heart of many machine learning problems. Bundle methods for regularized risk minimization (BMRM) and the closely related SVMStruct are considered the best general purpose solvers to tackle this problem. It was recently shown that BMRM requires O(1/ε) iterations to converge to an ε accurate solution. In the fir...


Error Bounds in Parameter Estimation Under Mismatch

In this paper we develop a new upper bound for the mean square estimation error of a parameter that takes values on a bounded interval. The bound is based on the discretization of the region into a finite number of points, and the determination of the estimate by a maximum likelihood procedure. It is assumed that inaccurate versions of the true spectra are utilized in the implementation of the ...


New fractional error bounds for polynomial systems with applications to Hölderian stability in optimization and spectral theory of tensors

In this paper we derive new fractional error bounds for polynomial systems with exponents explicitly determined by the dimension of the underlying space and the number/degree of the involved polynomials. Our major result extends the existing error bounds from the system involving only a single polynomial to a general polynomial system and does not require any regularity assumptions. In this way w...


The Joy of Forgetting: Faster Anytime Search via Restarting

Anytime search algorithms solve optimisation problems by quickly finding a (usually suboptimal) first solution and then finding improved solutions when given additional time. To deliver an initial solution quickly, they are typically greedy with respect to the heuristic cost-to-go estimate h. In this paper, we show that this low-h bias can cause poor performance if the greedy search makes early...



Journal

Journal title: Journal of Optimization Theory and Applications

Year: 2022

ISSN: 0022-3239, 1573-2878

DOI: https://doi.org/10.1007/s10957-021-01989-7